Transformer-based Denoising Adversarial Variational Entity Resolution

Authors

Abstract

Entity resolution (ER), precisely identifying different representations of the same real-world entities, is critical for data integration. The ER problem has been studied for many years, and many methods have been proposed to solve it. Although deep learning has achieved good performance on ER tasks, challenges remain regarding manual labeling and model transfer. This paper proposes a novel model, Transformer-based Denoising Adversarial Variational Entity Resolution (TdavER). For entity embedding, we develop an unsupervised embedding method based on denoising autoencoders and pre-trained language models, which takes corrupted input during training to motivate the encoder to generate stable, robust, and high-quality representations. Furthermore, we propose a feature transformation based on adversarial variational autoencoders to ease the constraints imposed by the source data. It converts low-level embeddings into high-level probability distributions that are not constrained by the data source and still contain similarity features. To better implement this transformation, we adopt adversarial networks to optimize the autoencoder's training process and help it learn the correct posterior distribution. Extensive experiments confirm that our TdavER is comparable with the current state of the art and is transferable.
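The two ingredients described above lend themselves to a compact sketch: a denoising autoencoder trained on corrupted copies of entity embeddings, plus an adversarial critic that pushes the latent codes toward a chosen prior so they behave like samples from a probability distribution. The PyTorch snippet below is a minimal, hypothetical illustration of that combination, not the authors' implementation; module names, dimensions, loss weights, and the random 768-dimensional inputs (standing in for pre-trained language-model embeddings) are all assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

def corrupt(x, drop_prob=0.3):
    # Randomly zero out feature dimensions: a simple stand-in for the
    # token/attribute corruption used to train a denoising autoencoder.
    mask = (torch.rand_like(x) > drop_prob).float()
    return x * mask

class DenoisingAE(nn.Module):
    def __init__(self, dim_in=768, dim_z=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim_in, 256), nn.ReLU(), nn.Linear(256, dim_z))
        self.dec = nn.Sequential(nn.Linear(dim_z, 256), nn.ReLU(), nn.Linear(256, dim_in))

    def forward(self, x):
        z = self.enc(corrupt(x))   # encode a corrupted view of the input
        return z, self.dec(z)      # decoder tries to reconstruct the clean input

class Critic(nn.Module):
    # Discriminator that separates latent codes from prior samples, i.e. an
    # adversarial regularizer on the posterior (in the spirit of adversarial
    # autoencoders / adversarial variational Bayes).
    def __init__(self, dim_z=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_z, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, z):
        return self.net(z)

# Toy batch of "pre-trained language model" embeddings for entity descriptions;
# in practice these would come from a BERT-style encoder.
x = torch.randn(32, 768)

ae, critic = DenoisingAE(), Critic()
opt_ae = torch.optim.Adam(ae.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(critic.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
real, fake = torch.ones(32, 1), torch.zeros(32, 1)

for step in range(200):
    # Critic step: prior samples are "real", encoder outputs are "fake".
    z, _ = ae(x)
    d_loss = bce(critic(torch.randn_like(z)), real) + bce(critic(z.detach()), fake)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Autoencoder step: reconstruct the clean input and fool the critic.
    z, x_hat = ae(x)
    ae_loss = F.mse_loss(x_hat, x) + 0.1 * bce(critic(z), real)
    opt_ae.zero_grad()
    ae_loss.backward()
    opt_ae.step()

# Entity pairs can then be matched on the learned codes, e.g. by cosine similarity of z_i and z_j.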


Related resources

Denoising Adversarial Autoencoders

Unsupervised learning is of growing interest because it unlocks the potential held in vast amounts of unlabelled data to learn useful representations for inference. Autoencoders, a form of generative model, may be trained by learning to reconstruct unlabelled input data from a latent representation space. More robust representations may be produced by an autoencoder if it learns to recover clea...
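In sketch form, the denoising objective described above replaces plain reconstruction with reconstruction of the clean input from a corrupted copy; the notation below is generic rather than the paper's own:

L_{\mathrm{DAE}} \;=\; \mathbb{E}_{x \sim p_{\mathrm{data}}}\, \mathbb{E}_{\tilde{x} \sim C(\tilde{x} \mid x)} \left[ \lVert x - \mathrm{Dec}(\mathrm{Enc}(\tilde{x})) \rVert^2 \right],

where C(\tilde{x} \mid x) is a corruption process such as masking or additive noise.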


Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks

Variational Autoencoders (VAEs) are expressive latent variable models that can be used to learn complex probability distributions from training data. However, the quality of the resulting model crucially relies on the expressiveness of the inference model. We introduce Adversarial Variational Bayes (AVB), a technique for training Variational Autoencoders with arbitrarily expressive inference mo...
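Roughly, AVB sidesteps the intractable KL term of the ELBO with a discriminator T(x, z) trained to tell posterior samples from prior samples; the equations below are a generic sketch of that idea, not text from the paper:

\max_T \; \mathbb{E}_{q_\phi(z \mid x)}[\log \sigma(T(x,z))] + \mathbb{E}_{p(z)}[\log(1 - \sigma(T(x,z)))], \qquad T^{*}(x,z) = \log q_\phi(z \mid x) - \log p(z),

so the ELBO can be estimated as \mathbb{E}_{q_\phi(z \mid x)}[\log p_\theta(x \mid z) - T^{*}(x,z)], allowing arbitrarily expressive inference networks.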


Adversarial Symmetric Variational Autoencoder

A new form of variational autoencoder (VAE) is developed, in which the joint distribution of data and codes is considered in two (symmetric) forms: (i) from observed data fed through the encoder to yield codes, and (ii) from latent codes drawn from a simple prior and propagated through the decoder to manifest data. Lower bounds are learned for marginal log-likelihood fits observed data and late...
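Schematically, the two symmetric factorizations of the joint distribution referred to above are

q_\phi(x, z) = q(x)\, q_\phi(z \mid x) \quad \text{(data fed through the encoder)}, \qquad p_\theta(x, z) = p(z)\, p_\theta(x \mid z) \quad \text{(prior codes fed through the decoder)},

and training drives the two joints toward each other while maximizing lower bounds on the marginal log-likelihoods of observed data and latent codes.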


Adversarial Images for Variational Autoencoders

We investigate adversarial attacks for autoencoders. We propose a procedure that distorts the input image to mislead the autoencoder in reconstructing a completely different target image. We attack the internal latent representations, attempting to make the adversarial input produce an internal representation as similar as possible as the target’s. We find that autoencoders are much more robust...
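A common way to write such a latent-space attack (not necessarily the paper's exact formulation) is

\min_{d} \; \Delta\big(\mathrm{Enc}(x + d),\, \mathrm{Enc}(x_{\mathrm{target}})\big) + \lambda \lVert d \rVert,

i.e. search for a small input distortion d whose latent representation is as close as possible to that of the target image.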


Generative Adversarial Networks as Variational Training of Energy Based Models

In this paper, we study deep generative models for effective unsupervised learning. We propose VGAN, which works by minimizing a variational lower bound of the negative log likelihood (NLL) of an energy based model (EBM), where the model density p(x) is approximated by a variational distribution q(x) that is easy to sample from. The training of VGAN takes a two step procedure: given p(x), q(x) ...
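Under the usual energy-based parameterization p_\theta(x) \propto \exp(-E_\theta(x)), one standard way to obtain such a bound is Jensen's inequality on the partition function; the derivation below is a generic sketch rather than the paper's exact formulation:

\log Z_\theta = \log \int e^{-E_\theta(x)}\, dx \;\ge\; \mathbb{E}_{q}[-E_\theta(x)] + H(q),

so

\mathrm{NLL}(\theta) = \mathbb{E}_{p_{\mathrm{data}}}[E_\theta(x)] + \log Z_\theta \;\ge\; \mathbb{E}_{p_{\mathrm{data}}}[E_\theta(x)] - \mathbb{E}_{q}[E_\theta(x)] + H(q),

with the two-step procedure alternating between tightening the bound in q and minimizing it in \theta.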



Journal

Journal title: Journal of Intelligent Information Systems

Year: 2023

ISSN: 1573-7675, 0925-9902

DOI: https://doi.org/10.1007/s10844-022-00773-x